Source: All these data sets are made up of data from the US government. https://www.cia.gov/library/publications/the-world-factbook/docs/faqs.html
TASK: Run the following cells to import libraries and read in data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('inp_files/CIA_Country_Facts.csv')
TASK: Explore the rows and columns of the data as well as the data types of the columns.
df.head()
df.info()
df.describe().transpose()
Let's create some visualizations. Please feel free to expand on these with your own analysis and charts!
TASK: Create a histogram of the Population column.
sns.histplot(data=df,x='Population')
plt.show() # China and India - two tiny dots far to the right force the whole plot to zoom out
TASK: You should notice the histogram is skewed due to a few large countries, reset the X axis to only show countries with less than 0.5 billion people
# A billion has 9 zeros, so half a billion is 500,000,000 (5 * 10^8)
sns.histplot(data=df[df['Population']<500000000],x='Population')
plt.show()
# the chart now runs from 0 to about 3 * 10^8, i.e. 300 million
TASK: Now let's explore GDP and Regions. Create a bar chart showing the mean GDP per Capita per region (recall the black bar represents std).
sns.barplot(data=df,x='Region',y='GDP ($ per capita)')
plt.xticks(rotation=90)
plt.show()
# shows how wealthy each region is: Western Europe and Northern America are the richest/most productive regions
# the black bar in the Northern America region shows the standard deviation; it is long (large) because US GDP per capita
# is much higher than Canada's or Mexico's. It really shows how widely wealth is dispersed within that region.
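The bar heights and error bars above summarize a per-region aggregation; a quick sketch with a toy frame (made-up numbers, same column names) shows the underlying groupby:

```python
import pandas as pd

# Toy frame mimicking the Region / GDP ($ per capita) columns (values are made up)
toy = pd.DataFrame({
    'Region': ['NA', 'NA', 'NA', 'EU', 'EU'],
    'GDP ($ per capita)': [37800.0, 29800.0, 9000.0, 27700.0, 28700.0],
})

# The bar height is the group mean; the black bar shows the spread.
# Note: seaborn >= 0.12 defaults to a confidence interval, so pass
# errorbar='sd' to sns.barplot to get the standard deviation explicitly.
stats = toy.groupby('Region')['GDP ($ per capita)'].agg(['mean', 'std'])
print(stats)
```

On the toy data the 'NA' group has a much larger std than 'EU', mirroring the long black bar discussed above.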
TASK: Create a scatterplot showing the relationship between Phones per 1000 people and the GDP per Capita. Color these points by Region.
plt.figure(figsize=(10,6),dpi=200)
sns.scatterplot(data=df,x='GDP ($ per capita)',y='Phones (per 1000)', hue='Region')
plt.legend(loc=(1.05,0.5)) # (x,y) coordinates for where to place the legend
plt.show()
# two green outliers stand out. One at the very top is above 1000, even though the column is phones per 1000 people,
# meaning that country has more phones than people. The other, far to the right, has a very high GDP per capita
# but relatively few phones compared to other countries. Normally you would expect the phone count
# to grow roughly in proportion as GDP per capita grows.
# Next, let's see which countries these are.
df[df['GDP ($ per capita)']>50000] # a very rich country with very few inhabitants, hence the outlier in this case
df[df['Phones (per 1000)']>1000] # 1035 phones per 1000 people
TASK: Create a scatterplot showing the relationship between GDP per Capita and Literacy (color the points by Region). What conclusions do you draw from this plot?
plt.figure(figsize=(10,6),dpi=200)
sns.scatterplot(data=df,x='GDP ($ per capita)',y='Literacy (%)', hue='Region')
plt.show()
# there is no linear relationship as you might expect.
# what it really shows is that low literacy makes it very likely the country is poor, but a poor country
# does not necessarily have low literacy. Also, above roughly $10,000 GDP per capita it is practically
# guaranteed that the country is highly literate.
TASK: Create a Heatmap of the Correlation between columns in the DataFrame.
sns.heatmap(data=df.corr(numeric_only=True)) # numeric_only avoids errors on string columns in newer pandas
plt.show()
# infant mortality and birthrate carry essentially the same information, so the correlation should be strong.
# We can see that it is, since the color is very close to white, and white means corr = 1.
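Instead of eyeballing colors, the strongest pairs can be pulled out of the correlation matrix programmatically. A small sketch on a toy frame (synthetic columns, where `b` is constructed to track `a`); in the notebook you would apply the same lines to `df.corr(numeric_only=True)`:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.normal(size=100)
toy = pd.DataFrame({
    'a': x,
    'b': x * 2 + rng.normal(scale=0.1, size=100),  # strongly correlated with 'a'
    'c': rng.normal(size=100),                     # independent noise
})

corr = toy.corr()
# mask the diagonal (self-correlation is always 1), flatten, and rank the pairs
pairs = corr.where(~np.eye(len(corr), dtype=bool)).unstack().dropna()
print(pairs.abs().sort_values(ascending=False).head(4))
```

The a/b pair comes out near 1, just like infant mortality vs. birthrate in the heatmap.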
TASK: Seaborn can auto perform hierarchical clustering through the clustermap() function. Create a clustermap of the correlations between each column with this function.
sns.clustermap(data=df.corr(numeric_only=True))
plt.show()
# the hierarchical clustering results are visible right away, showing which features are similar.
# we can see two large clusters (the light squares, also visible in the dendrogram)
# and that they are opposites, i.e. they anti-correlate with each other (the dark squares).
# the first consists of 'life' features: deathrate, birthrate, infant mortality, agriculture;
# the other of country wealth measures: phones, GDP, literacy, service;
# and then many smaller clusters (in the dendrogram) made up of various features.
df.isnull().sum()
TASK: What countries have NaN for Agriculture? What is the main aspect of these countries?
df[df['Agriculture'].isnull()]['Country']
TASK: You should have noticed most of these countries are tiny islands, with the exception of Greenland and Western Sahara. Go ahead and fill any of these countries' missing NaN values with 0, since they are so small or essentially non-existent. There should be 15 countries in total you do this for. For a hint on how to do this, recall you can do the following:
df[df['feature'].isnull()]
# zero out all NaN values for these rows, not just in the Agriculture column, since we saw that
# where Agriculture is NaN the rows are tiny islands that will be missing many other values too (educated guess)
df[df['Agriculture'].isnull()] = df[df['Agriculture'].isnull()].fillna(0)
TASK: Now check to see what is still missing by counting number of missing elements again per feature:
df.isnull().sum()
# we can see that quite a few missing values disappeared in the other columns too, which is good.
# this was a quick fix for the tiny-islands problem
# of course you could dig deeper and solve it better (not all of them are tiny islands)
TASK: Notice climate is missing for a few countries, but not the Region! Let's use this to our advantage. Fill in the missing Climate values based on the mean climate value for its region.
Hints on how to do this: https://stackoverflow.com/questions/19966018/pandas-filling-missing-values-by-mean-in-each-group
df.groupby('Region')['Climate'].mean()
# all Climate values, each replaced by its region's Climate mean:
df.groupby('Region')['Climate'].transform('mean')
# the same but with a lambda function:
# df.groupby('Region')['Climate'].transform(lambda val: val.mean())
# val is the Climate column, one group at a time
# all Climate values, but with only those that were NaN/null replaced by the region's Climate mean:
df.groupby('Region')['Climate'].transform(lambda val: val.fillna(val.mean()))
# df[df['Climate'].isnull()]['Climate'] = new values - won't work, because filtering gives us a slice, and
# assigning to that slice's column leaves the original df unchanged. It does work if we use .loc instead:
# df.loc[df['Climate'].isnull(), 'Climate'] = df.groupby ...
# We can also assign directly: df['Climate'] = df['Climate'].fillna(values)
# because fillna replaces the NaN values but returns ALL of the column's values,
# and the values we pass in must cover all rows, not just the ones being replaced (the NaN ones),
# since the fillna function itself takes care of that
df['Climate'] = df['Climate'].fillna(df.groupby('Region')['Climate'].transform('mean'))
# or as we did earlier:
# df['Climate'] = df.groupby('Region')['Climate'].transform(lambda val: val.fillna(val.mean()))
# So we can either assign the column all of its values with the NaNs already replaced, as earlier,
# or, as we did here, replace all the values and hand them to fillna, which fills in only the NaNs.
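A minimal toy example (made-up numbers) of the fillna + transform pattern above, showing that only the NaN rows are touched:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'Region': ['A', 'A', 'A', 'B', 'B'],
    'Climate': [1.0, 3.0, np.nan, 2.0, np.nan],
})

# transform('mean') returns a full-length Series of group means,
# so fillna uses it only where Climate is NaN
group_means = toy.groupby('Region')['Climate'].transform('mean')
toy['Climate'] = toy['Climate'].fillna(group_means)
print(toy['Climate'].tolist())  # -> [1.0, 3.0, 2.0, 2.0, 2.0]
```

The existing 1.0 and 3.0 in group A survive; only the NaN gets A's mean (2.0).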
TASK: Check again how many elements are missing:
df.isnull().sum()
TASK: It looks like Literacy percentage is missing. Use the same tactic as we did with Climate missing values and fill in any missing Literacy % values with the mean Literacy % of the Region.
df['Literacy (%)'] = df['Literacy (%)'].fillna(df.groupby('Region')['Literacy (%)'].transform('mean'))
TASK: Check again on the remaining missing values:
df.isnull().sum()
TASK: Optional: We are now missing values for only a few countries. Go ahead and drop these countries OR feel free to fill in these last few remaining values with any preferred methodology. For simplicity, we will drop these.
len(df)
df = df.dropna()
len(df) # we dropped 6 countries. Not a big deal for clustering.
TASK: It is now time to prepare the data for clustering. The Country column is still a unique identifier string, so it won't be useful for clustering, since it's unique for each point. Go ahead and drop this Country column.
# the country name is not a feature, it's just a label
X = df.drop('Country',axis=1)
TASK: Now let's create the X array of features, the Region column is still categorical strings, use Pandas to create dummy variables from this column to create a finalized X matrix of continuous features along with the dummy variables for the Regions.
X = pd.get_dummies(X)
X.head()
TASK: Due to some measurements being in terms of percentages and other metrics being total counts (population), we should scale this data first. Use Sklearn to scale the X feature matrix.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_X = scaler.fit_transform(X)
scaled_X
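A quick sanity check of what StandardScaler does, sketched on a toy matrix (made-up numbers) with columns on very different scales:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# toy matrix: first column ~ units, second column ~ hundreds
X_toy = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])

scaled = StandardScaler().fit_transform(X_toy)
print(scaled.mean(axis=0))  # each column centered at ~0
print(scaled.std(axis=0))   # each column scaled to unit (population) std
```

After scaling, both columns contribute on equal footing to KMeans distances, which is the point of this step.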
TASK: Use a for loop to create and fit multiple KMeans models, testing from K=2-30 clusters. Keep track of the Sum of Squared Distances for each K value, then plot this out to create an "elbow" plot of K versus SSD. Optional: You may also want to create a bar plot showing the SSD difference from the previous cluster.
from sklearn.cluster import KMeans
ssd = []
# many iterations can take a long time
for k in range(2,30):
    model = KMeans(n_clusters=k)
    model.fit(scaled_X)
    ssd.append(model.inertia_)
plt.plot(range(2,30),ssd,'o--')
plt.show()
pd.Series(ssd).diff().plot(kind='bar')
plt.show()
# from both plots we can see that the rate of decrease slows at K=3 (index 2 in the barplot).
# After that, the next big drop is at K=16.
# There is no single right answer: you could split into 3 clusters, or into 16.
# In practice you would pick whichever K value interests you and continue the analysis from there.
# in the barplot we are simply looking for short bars (they show small differences, since diff() is what is plotted)
# and then index + 1 = the K value
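One way to avoid the "barplot index + 1 = K" bookkeeping is to index the SSD Series by K itself; a sketch with made-up SSD values:

```python
import pandas as pd

# toy SSD values (made up) for K = 2..6
ssd_toy = [100.0, 60.0, 45.0, 40.0, 38.0]
drops = pd.Series(ssd_toy, index=range(2, 7)).diff()
print(drops)  # NaN at K=2, then the SSD change when moving up to each K
```

Now `drops[3]` reads directly as "how much SSD fell when going from K=2 to K=3", and `drops.plot(kind='bar')` labels the bars with K values instead of raw indices.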
TASK: What K value do you think is a good choice? Are there multiple reasonable choices? What features are helping define these cluster choices? As this is unsupervised learning, there is no 100% correct answer here. Please feel free to jump to the solutions for a full discussion on this!
# Nothing to really code here, but choose a K value and see what features
# are most correlated to belonging to a particular cluster!
# Remember, there is no 100% correct answer here!
One could say that there is a significant drop off in SSD difference at K=3 (although we can see it continues to drop off past this). What would an analysis look like for K=3? Let's explore which features are important in the decision of 3 clusters!
model = KMeans(n_clusters=3)
model.fit(scaled_X)
model.labels_
X['Cluster'] = model.labels_
X.corr()['Cluster'].iloc[:-1].sort_values()
X.corr()['Cluster'].iloc[:-1].sort_values().plot(kind='bar')
plt.show()
# we can see that one cluster can be interpreted as rich countries (GDP, Western Europe)
# another cluster's strongest features are Latin America and birthrate
# the best way to understand how the clustering works here is to do the bonus task. See below.
The best way to interpret this model is through visualizing the clusters of countries on a map! NOTE: THIS IS A BONUS SECTION. YOU MAY WANT TO JUMP TO THE SOLUTIONS LECTURE FOR A FULL GUIDE, SINCE WE WILL COVER TOPICS NOT PREVIOUSLY DISCUSSED AND BE HAVING A NUANCED DISCUSSION ON PERFORMANCE!
IF YOU GET STUCK, PLEASE CHECK OUT THE SOLUTIONS LECTURE. AS THIS IS OPTIONAL AND COVERS MANY TOPICS NOT SHOWN IN ANY PREVIOUS LECTURE
TASK: Create cluster labels for a chosen K value. Based on the solutions, we believe either K=3 or K=15 are reasonable choices. But feel free to choose differently and explore.
TASK: Let's put you in the real world! Your boss just asked you to plot out these clusters on a country level choropleth map, can you figure out how to do this? We won't step by step guide you at all on this, just show you an example result. You'll need to do the following:
Figure out how to install plotly library: https://plotly.com/python/getting-started/
Figure out how to create a geographical choropleth map using plotly: https://plotly.com/python/choropleth-maps/#using-builtin-country-and-state-geometries
You will need ISO Codes for this. Either use the wikipedia page, or use our provided file for this: "../DATA/country_iso_codes.csv"
Combine the cluster labels, ISO Codes, and Country Names to create a world map plot with plotly given what you learned in Step 1 and Step 2.
Note: This is meant to be a more realistic project, where you have a clear objective of what you need to create and accomplish and the necessary online documentation. It's up to you to piece everything together to figure it out! If you get stuck, no worries! Check out the solution lecture.
!pip3 install plotly==5.9.0
!pip3 install --upgrade pip
import plotly.express as px
iso_codes = pd.read_csv('inp_files/country_iso_codes.csv')
iso_codes
iso_map = iso_codes.set_index('Country')['ISO Code'].to_dict()
# print partial view of dict
{k: v for i, (k, v) in enumerate(iso_map.items()) if i < 10}
#another way: list(iso_map.items())[:10]
# making another column from country column
df['iso code'] = df['Country'].map(iso_map) # some will be NaN because they're not in the map
#also adding cluster column
df['cluster'] = model.labels_
df.head()
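The NaN behavior of `.map` with a partial dictionary is worth seeing in isolation; a toy sketch (hypothetical country names and codes, not the real ISO file):

```python
import pandas as pd

countries = pd.Series(['France', 'Germany', 'Atlantis'])
iso_toy = {'France': 'FRA', 'Germany': 'DEU'}  # hypothetical partial mapping

# keys missing from the dict map to NaN rather than raising an error
mapped = countries.map(iso_toy)
print(mapped.isnull().sum())  # -> 1 ('Atlantis' has no code)
```

In the real notebook the same check (`df['iso code'].isnull().sum()`) tells you how many countries failed to match the ISO file and may need manual name fixes.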
# jupyter-notebook needs to be launched with an increased iopub_data_rate_limit,
# otherwise it shows an error that the default limit is too low to display the map
# jupyter-notebook --NotebookApp.iopub_data_rate_limit=1.0e10
# or upgrade the notebook version to >=5.2.2
fig = px.choropleth(df, locations="iso code", # needs country iso codes
color="cluster", # color by cluster column
hover_name="Country", # column to add to hover information
)
fig.show()
# we can see the 3 clusters: developed countries, Africa, and the rest.
fig.write_html("countries_plot.html")